
    Anti-Fall: A Non-intrusive and Real-time Fall Detector Leveraging CSI from Commodity WiFi Devices

    Falls are among the major health threats and obstacles to independent living for the elderly; timely and reliable fall detection is crucial for mitigating their effects. In this paper, leveraging the fine-grained Channel State Information (CSI) and the multi-antenna setting in commodity WiFi devices, we design and implement a real-time, non-intrusive, and low-cost indoor fall detector called Anti-Fall. For the first time, the CSI phase difference over two antennas is identified as the salient feature for reliably segmenting fall and fall-like activities; both the phase and amplitude information of CSI are then exploited to accurately separate falls from other fall-like activities. Experimental results in two indoor scenarios demonstrate that Anti-Fall consistently outperforms the state-of-the-art approach WiFall, with a 10% higher detection rate and a 10% lower false alarm rate on average.
    Comment: 13 pages, 8 figures, corrected version, ICOST conference
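    The abstract's key idea is that the phase difference of CSI across two receive antennas changes sharply during fall-like motion. A minimal, hypothetical sketch (not the authors' code; the window size, variance threshold, and synthetic CSI below are all invented for illustration) of segmenting activity from such a feature:

    ```python
    import numpy as np

    def phase_difference(csi_a, csi_b):
        """Per-sample phase difference between two antennas' complex CSI."""
        return np.angle(csi_a * np.conj(csi_b))

    def segment_activity(csi_a, csi_b, win=100, thresh=0.5):
        """Flag fixed-size windows whose phase-difference variance is high."""
        pd = phase_difference(csi_a, csi_b)
        n_win = len(pd) // win
        var = np.array([pd[i * win:(i + 1) * win].var() for i in range(n_win)])
        return var > thresh

    # Synthetic stream: a quiet channel with a burst of phase disturbance
    # (a stand-in for fall-like activity) in samples 300-399.
    rng = np.random.default_rng(0)
    base = np.exp(1j * 0.1 * rng.standard_normal(600))
    ant_a = base
    ant_b = base * np.exp(1j * 0.05 * rng.standard_normal(600))
    ant_b[300:400] *= np.exp(1j * 2.0 * rng.standard_normal(100))
    flags = segment_activity(ant_a, ant_b)
    ```

    Here only the window covering the disturbance is flagged; a real detector would then inspect the flagged window's phase and amplitude to separate falls from fall-like activities.
    
    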

    Implicit surfaces with globally regularised and compactly supported basis functions

    We consider the problem of constructing a function whose zero set represents a surface, given sample points with surface normal vectors. The contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously associated only with fully supported bases, along with a proof of equivalence to a Gaussian process with a modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, together with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as on 4D time series data.
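    The zero-set construction described above (an implicit function fitted to on-surface points and normal-offset points) can be sketched on a 2D toy. Note the sketch uses SciPy's globally supported thin-plate RBF rather than the paper's regularised compactly supported bases, and the unit-circle data and offset eps are invented:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # 2D toy curve (a unit circle) standing in for a 3D surface: sample
    # points plus unit normals; for the circle the normal equals the position.
    theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
    pts = np.c_[np.cos(theta), np.sin(theta)]
    normals = pts.copy()

    # Standard construction: off-surface points along the normals carry
    # signed nonzero targets, so the interpolant's zero set is the surface.
    eps = 0.1
    X = np.vstack([pts, pts + eps * normals, pts - eps * normals])
    y = np.concatenate([np.zeros(40), np.full(40, eps), np.full(40, -eps)])

    f = RBFInterpolator(X, y, kernel="thin_plate_spline")
    ```

    The fitted `f` is near zero on the sampled curve, positive just outside it, and negative just inside, which is what makes the zero set usable as the surface.
    
    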

    DC-Prophet: Predicting Catastrophic Machine Failures in DataCenters

    When will a server fail catastrophically in an industrial datacenter? Is it possible to forecast these failures so that preventive actions can be taken to increase the reliability of a datacenter? To answer these questions, we have studied what are probably the largest publicly available datacenter traces, containing more than 104 million events from 12,500 machines. Among these samples, we observe and categorize three types of machine failures, all of which are catastrophic and may lead to information loss or, even worse, reliability degradation of a datacenter. We further propose a two-stage framework, DC-Prophet, based on a One-Class Support Vector Machine and a Random Forest. DC-Prophet extracts surprising patterns and accurately predicts the next failure of a machine. Experimental results show that DC-Prophet achieves an AUC of 0.93 in predicting the next machine failure, and an F3-score of 0.88 (out of 1). On average, DC-Prophet outperforms other classical machine learning methods by 39.45% in F3-score.
    Comment: 13 pages, 5 figures, accepted by ECML PKDD 2017
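    The two-stage shape named above (a One-Class SVM screening for anomalous machines, then a Random Forest classifying the flagged ones) can be sketched on synthetic data. This is not DC-Prophet itself: the per-machine features, class sizes, and hyperparameters below are invented, and the sketch is fit and evaluated in-sample purely to show the data flow:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    # Hypothetical per-machine features (e.g. CPU, memory, event counts).
    healthy = rng.normal(0.0, 1.0, size=(500, 4))
    failing = rng.normal(3.0, 1.0, size=(40, 4))
    X = np.vstack([healthy, failing])
    y = np.array([0] * 500 + [1] * 40)

    # Stage 1: a One-Class SVM trained only on healthy machines flags anomalies.
    ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(healthy)
    suspect = ocsvm.predict(X) == -1  # -1 marks outliers

    # Stage 2: a Random Forest classifies only the flagged machines.
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(X[suspect], y[suspect])
    preds = np.zeros_like(y)
    preds[suspect] = rf.predict(X[suspect])
    ```

    The first stage keeps the second stage's training set small and focused on unusual machines, which is one plausible reason for splitting the problem in two.
    
    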

    Forecasting: Adopting the Methodology of Support Vector Machines to Nursing Research

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/71569/1/j.1741-6787.2006.00062.x.pd

    Active Sampling-based Binary Verification of Dynamical Systems

    Nonlinear, adaptive, or otherwise complex control techniques are increasingly relied upon to ensure the safety of systems operating in uncertain environments. However, the nonlinearity of the resulting closed-loop system complicates verification that the system does in fact satisfy its safety requirements at all possible operating conditions. While analytical proof-based techniques and finite abstractions can be used to provably verify the closed-loop system's response at different operating conditions, they often produce conservative approximations due to restrictive assumptions and are difficult to construct in many applications. In contrast, popular statistical verification techniques relax these restrictions and instead rely upon simulations to construct statistical or probabilistic guarantees. This work presents a data-driven statistical verification procedure that instead constructs statistical learning models from simulated training data to separate the set of possible perturbations into "safe" and "unsafe" subsets. Binary evaluations of closed-loop system requirement satisfaction at various realizations of the uncertainties are obtained through temporal logic robustness metrics, which are then used to construct predictive models of requirement satisfaction over the full set of possible uncertainties. As the accuracy of these predictive statistical models is inherently coupled to the quality of the training data, an active learning algorithm selects additional sample points in order to maximize the expected change in the data-driven model and thus, indirectly, minimize the prediction error. Various case studies demonstrate the closed-loop verification procedure and highlight improvements in prediction error over both existing analytical and statistical verification techniques.
    Comment: 23 pages
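    The loop described above (simulate, label safe/unsafe, fit a predictive model, then actively pick the next sample) can be sketched on a one-parameter toy. The sketch substitutes plain uncertainty sampling for the paper's expected-model-change criterion, and a threshold check for a temporal-logic robustness metric; the requirement, candidate grid, and initial design are all invented:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier

    # Toy stand-in for "simulate the closed loop and check the requirement":
    # the system is deemed safe iff the uncertain parameter satisfies |p| < 1.5.
    def simulate_and_check(p):
        return float(abs(p) < 1.5)

    candidates = np.linspace(-3.0, 3.0, 301).reshape(-1, 1)
    X = np.array([[-2.5], [-0.5], [0.5], [2.5]])   # small initial design
    y = np.array([simulate_and_check(p) for p in X.ravel()])

    for _ in range(10):
        gpc = GaussianProcessClassifier(random_state=0).fit(X, y)
        p_safe = gpc.predict_proba(candidates)[:, 1]
        # Query where the model is least certain (closest to the boundary),
        # concentrating expensive simulations near the safe/unsafe frontier.
        i = int(np.argmin(np.abs(p_safe - 0.5)))
        X = np.vstack([X, candidates[i]])
        y = np.append(y, simulate_and_check(candidates[i, 0]))
    ```

    After a handful of queries the classifier's decision boundary sits near the true safe set's edges, with far fewer simulations than a uniform grid would need.
    
    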

    Kernel-based independence tests for causal structure learning on functional data

    Measurements of systems taken along a continuous functional dimension, such as time or space, are ubiquitous in many fields, from the physical and biological sciences to economics and engineering. Such measurements can be viewed as realisations of an underlying smooth process sampled over the continuum. However, traditional methods for independence testing and causal learning are not directly applicable to such data, as they do not take into account the dependence along the functional dimension. By using specifically designed kernels, we introduce statistical tests for bivariate, joint, and conditional independence for functional variables. Our method not only extends the applicability to functional data of the Hilbert–Schmidt independence criterion (HSIC) and its d-variate version (d-HSIC), but also allows us to introduce a test for conditional independence by defining a novel statistic for the conditional permutation test (CPT) based on the Hilbert–Schmidt conditional independence criterion (HSCIC), with optimised regularisation strength estimated through an evaluation rejection rate. Our empirical results of the size and power of these tests on synthetic functional data show good performance, and we then exemplify their application to several constraint- and regression-based causal structure learning problems, including both synthetic examples and real socioeconomic data.
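    The HSIC-with-permutation-test pattern underlying the abstract can be shown in its plain vector-valued form (not the functional kernels or the HSCIC-based conditional test the paper develops; the RBF bandwidth and synthetic data are fixed arbitrarily):

    ```python
    import numpy as np

    def hsic(x, y, sigma=1.0):
        """Biased empirical HSIC with RBF kernels and a fixed bandwidth."""
        n = len(x)
        def gram(a):
            sq = (a[:, None] - a[None, :]) ** 2
            return np.exp(-sq / (2.0 * sigma ** 2))
        K, L = gram(x), gram(y)
        H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
        return np.trace(H @ K @ H @ L) / (n - 1) ** 2

    def permutation_pvalue(x, y, n_perm=200, seed=0):
        """Null distribution by permuting y, which breaks any dependence on x."""
        rng = np.random.default_rng(seed)
        observed = hsic(x, y)
        null = [hsic(x, rng.permutation(y)) for _ in range(n_perm)]
        return float(np.mean([s >= observed for s in null]))

    rng = np.random.default_rng(3)
    x = rng.standard_normal(100)
    p_dep = permutation_pvalue(x, x + 0.1 * rng.standard_normal(100))  # dependent
    p_ind = permutation_pvalue(x, rng.standard_normal(100))            # independent
    ```

    The dependent pair yields a tiny p-value while the independent pair does not; the paper's contribution is making this machinery work when each observation is itself a function.
    
    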

    Domain adaptation with conditional transferable components

    © 2016 by the author(s). Domain adaptation arises in supervised learning when the training (source domain) and test (target domain) data have different distributions. Let X and Y denote the features and target, respectively. Previous work on domain adaptation mainly considers the covariate shift situation, where the distribution of the features P(X) changes across domains while the conditional distribution P(Y|X) stays the same. To reduce domain discrepancy, recent methods try to find invariant components T(X) that have similar P(T(X)) on different domains by explicitly minimizing a distribution discrepancy measure. However, it is not clear whether P(Y|T(X)) in different domains is also similar when P(Y|X) changes. Furthermore, transferable components do not necessarily have to be invariant. If the change in some components is identifiable, we can make use of such components for prediction in the target domain. In this paper, we focus on the case where P(X|Y) and P(Y) both change in a causal system in which Y is the cause for X. Under appropriate assumptions, we aim to extract conditional transferable components whose conditional distribution P(T(X)|Y) is invariant after proper location-scale (LS) transformations, and to identify how P(Y) changes between domains simultaneously. We provide theoretical analysis and empirical evaluation on both synthetic and real-world data to show the effectiveness of our method.
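    The location-scale (LS) idea above can be made concrete with an oracle illustration: per-class location-scale transforms that match a source domain's class conditionals to a target domain's. Unlike the paper, which estimates the transforms and the changing P(Y) without target labels, this sketch uses the target labels directly, and the one-dimensional Gaussian class conditionals are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # A toy causal system Y -> X whose class conditionals shift across domains.
    def sample(n, shift, scale):
        y = rng.integers(0, 2, n)
        x = (np.where(y == 1, 2.0, -2.0) + rng.standard_normal(n)) * scale + shift
        return x, y

    x_src, y_src = sample(1000, shift=0.0, scale=1.0)
    x_tgt, y_tgt = sample(1000, shift=1.0, scale=1.5)

    def ls_align(x, y, x_ref, y_ref):
        """Per-class location-scale transform matching reference moments."""
        out = x.astype(float).copy()
        for c in (0, 1):
            mu, sd = x[y == c].mean(), x[y == c].std()
            mu_r, sd_r = x_ref[y_ref == c].mean(), x_ref[y_ref == c].std()
            out[y == c] = (x[y == c] - mu) / sd * sd_r + mu_r
        return out

    x_aligned = ls_align(x_src, y_src, x_tgt, y_tgt)
    ```

    After the transform, each class's conditional distribution matches the target's first two moments exactly, which is what makes LS-transformed components "conditionally transferable".
    
    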

    A learning approach to 3d object representation for classification

    Abstract. In this paper we describe our 3D object signature for 3D object classification. The signature is based on a learning approach that finds salient points on a 3D object and represents these points in a 2D spatial map based on a longitude-latitude transformation. Experimental results show high classification rates on both pose-normalized and rotated objects, and include a study of classification accuracy as a function of the number of rotations in the training set.
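    The longitude-latitude transformation mentioned above can be sketched as binning 3D points on a spherical grid. The learned salient-point detection that precedes it in the paper is not reproduced, and the grid resolution and sample points below are invented:

    ```python
    import numpy as np

    def lonlat_map(points, bins=(36, 18)):
        """Histogram 3D points on a longitude-latitude grid (a 2D signature)."""
        p = points / np.linalg.norm(points, axis=1, keepdims=True)
        lon = np.arctan2(p[:, 1], p[:, 0])            # longitude in [-pi, pi]
        lat = np.arcsin(np.clip(p[:, 2], -1.0, 1.0))  # latitude in [-pi/2, pi/2]
        hist, _, _ = np.histogram2d(
            lon, lat, bins=bins,
            range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
        return hist

    # Three example salient points: +x axis, north pole, +y axis.
    salient = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
    signature = lonlat_map(salient)
    ```

    The resulting fixed-size 2D map can then be fed to any standard classifier, which is the point of flattening the 3D geometry this way.
    
    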

    Supervised inference of gene-regulatory networks

    Background: Inference of protein interaction networks from various sources of data has become an important topic of both systems and computational biology. Here we present a supervised approach to the identification of gene expression regulatory networks.
    Results: The method is based on a kernel approach accompanied by genetic programming. As a data source, the method utilizes gene expression time series for the prediction of interactions among regulatory proteins and their target genes. The performance of the method was verified using Saccharomyces cerevisiae cell cycle and DNA/RNA/protein biosynthesis gene expression data. The results were compared with independent data sources. Finally, a prediction of novel interactions within yeast gene expression circuits was performed.
    Conclusion: Results show that our algorithm gives, in most cases, results identical to the independent experiments when compared with the YEASTRACT database. In several cases, our algorithm predicts novel interactions which have not been reported.
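    The supervised framing above (labeled regulator-target pairs, features derived from expression time series, a trained classifier) can be sketched minimally. The paper's kernel and genetic-programming machinery is replaced here by two hand-crafted similarity features and a Random Forest, on synthetic data with an invented lag-1 regulation pattern:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(5)
    T = 30  # time points per expression profile

    # Synthetic training pairs: true targets follow their regulator with lag 1.
    regulators = rng.standard_normal((20, T))
    true_targets = np.roll(regulators, 1, axis=1) + 0.1 * rng.standard_normal((20, T))
    random_genes = rng.standard_normal((20, T))

    def pair_features(reg, tgt):
        """Similarity features for a candidate (regulator, target) pair."""
        corr = np.corrcoef(reg, tgt)[0, 1]
        lag_corr = np.corrcoef(reg[:-1], tgt[1:])[0, 1]  # lag-1 coupling
        return [corr, lag_corr]

    X = [pair_features(r, t) for r, t in zip(regulators, true_targets)]
    X += [pair_features(r, g) for r, g in zip(regulators, random_genes)]
    y = [1] * 20 + [0] * 20  # 1 = known interaction, 0 = no interaction
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    ```

    Scoring every unlabeled (regulator, gene) pair with such a classifier is what produces the ranked predictions of novel interactions described in the conclusion.
    
    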

    Premise Selection for Mathematics by Corpus Analysis and Kernel Methods

    Smart premise selection is essential when using automated reasoning as a tool for large-theory formal proof development. A good method for premise selection in complex mathematical libraries is the application of machine learning to large corpora of proofs. This work develops learning-based premise selection in two ways. First, a newly available minimal dependency analysis of existing high-level formal mathematical proofs is used to build a large knowledge base of proof dependencies, providing precise data for ATP-based re-verification and for training premise selection algorithms. Second, a new machine learning algorithm for premise selection based on kernel methods is proposed and implemented. To evaluate the impact of both techniques, a benchmark consisting of 2078 large-theory mathematical problems is constructed, extending the older MPTP Challenge benchmark. The combined effect of the techniques results in a 50% improvement on the benchmark over the Vampire/SInE state-of-the-art system for automated reasoning in large theories.
    Comment: 26 pages
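    The core ranking task can be illustrated with a toy symbol-overlap kernel: score each known premise by the similarity of its symbol occurrences to the conjecture's, then rank. The premise names, symbol sets, and the Jaccard kernel are invented stand-ins, not the paper's kernel method or corpora:

    ```python
    # Toy premise selection: rank known premises by a set kernel (Jaccard
    # similarity) between their symbols and the conjecture's symbols.
    premises = {
        "add_comm":  {"add", "eq"},
        "mul_comm":  {"mul", "eq"},
        "add_assoc": {"add", "eq", "assoc"},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def rank_premises(conjecture_symbols):
        scores = {name: jaccard(conjecture_symbols, syms)
                  for name, syms in premises.items()}
        return sorted(scores, key=scores.get, reverse=True)

    ranking = rank_premises({"add", "eq"})  # -> best-matching premises first
    ```

    An ATP is then given only the top-ranked premises, which is what makes the selection step decisive in large theories.
    
    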